
    Finite-time stochastic input-to-state stability and observer-based controller design for singular nonlinear systems

    This paper investigates observer-based controller design for a class of singular nonlinear systems with state- and exogenous-disturbance-dependent noise. A new sufficient condition for finite-time stochastic input-to-state stability (FTSISS) of stochastic nonlinear systems is developed. Based on this condition, a sufficient condition under which the corresponding closed-loop error system is impulse-free and FTSISS is provided. A linear matrix inequality (LMI) condition from which the gains of the observer and the state-feedback controller can be computed is then derived. Finally, two simulation examples demonstrate the effectiveness of the proposed approaches.
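    As background for the LMI-based gain computation mentioned above, the block below recalls a generic Lyapunov-type synthesis inequality for a linear plant; it is a purely illustrative sketch and is not the paper's FTSISS condition for singular stochastic systems.

        % Generic state-feedback synthesis LMI for \dot{x} = Ax + Bu (illustrative background only):
        % find a symmetric P \succ 0 and a matrix Y such that
        \begin{equation}
            A P + P A^{\top} + B Y + Y^{\top} B^{\top} \prec 0,
        \end{equation}
        % then u = Kx with K = Y P^{-1} renders A + BK Hurwitz. Observer gains follow from the dual
        % inequality Q A + A^{\top} Q + Z C + C^{\top} Z^{\top} \prec 0, Q \succ 0, with L = Q^{-1} Z.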

    The weight distribution of a class of p-ary cyclic codes

    For an odd prime p and two positive integers n ⩾ 3 and k with n/gcd(n,k) odd, the paper determines the weight distribution of a class of p-ary cyclic codes C over F_p with nonzeros α^{-1}, α^{-(p^k+1)} and α^{-(p^{3k}+1)}, where α is a primitive element of F_{p^n}.
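    For context, a standard trace (Delsarte-type) description of such a code, assuming length N = p^n − 1, shows where the weight-distribution computation comes from; this is textbook background rather than a result of the paper.

        % Trace representation of the cyclic code over F_p with nonzeros
        % \alpha^{-1}, \alpha^{-(p^k+1)}, \alpha^{-(p^{3k}+1)} (assuming length N = p^n - 1):
        \begin{equation}
            \mathcal{C} = \Bigl\{ \bigl( \mathrm{Tr}^{n}_{1}\!\bigl( a\,\alpha^{i} + b\,\alpha^{(p^{k}+1)i} + c\,\alpha^{(p^{3k}+1)i} \bigr) \bigr)_{i=0}^{N-1} : a, b, c \in \mathbb{F}_{p^{n}} \Bigr\},
        \end{equation}
        % where \mathrm{Tr}^{n}_{1} is the trace map from F_{p^n} to F_p. The Hamming weight of the
        % codeword indexed by (a, b, c) is N minus the number of i with zero trace, so the weight
        % distribution reduces to evaluating the associated exponential sums.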

    Probability-based Global Cross-modal Upsampling for Pansharpening

    Pansharpening is an essential preprocessing step in remote sensing image processing. Although deep learning (DL) approaches have performed well on this task, the upsampling methods they currently use exploit only the local information of each pixel in the low-resolution multispectral (LRMS) image, neglecting both its global information and the cross-modal information of the guiding panchromatic (PAN) image, which limits further performance improvement. To address this issue, this paper develops a novel probability-based global cross-modal upsampling (PGCU) method for pansharpening. Specifically, we first formulate the PGCU method from a probabilistic perspective and then design an efficient network module that implements it, fully utilizing the information mentioned above while also accounting for channel specificity. The PGCU module consists of three blocks: information extraction (IE), distribution and expectation estimation (DEE), and fine adjustment (FA). Extensive experiments verify the superiority of the PGCU method over other popular upsampling methods. Experiments also show that the PGCU module can help improve the performance of existing state-of-the-art (SOTA) deep learning pansharpening methods. The code is available at https://github.com/Zeyu-Zhu/PGCU. Comment: 10 pages, 5 figures
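    To make the probabilistic upsampling idea concrete, the sketch below implements a minimal expectation-based cross-modal upsampler in PyTorch: each high-resolution pixel and channel receives a distribution over a set of candidate values, and the output is that distribution's expectation. All class, layer, and parameter names here are illustrative assumptions; the authors' actual PGCU module is available in the linked repository.

        # Minimal sketch of expectation-based cross-modal upsampling (illustrative only;
        # not the authors' PGCU code). Shapes and layer sizes are assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ExpectationUpsample(nn.Module):
            """Upsample LRMS by predicting, per output pixel and channel, a distribution
            over D candidate values and returning its expectation."""
            def __init__(self, ms_channels=4, num_values=32, feat_dim=16, scale=4):
                super().__init__()
                self.scale = scale
                # Information extraction: features from the PAN and LRMS images.
                self.pan_feat = nn.Conv2d(1, feat_dim, 3, padding=1)
                self.ms_feat = nn.Conv2d(ms_channels, feat_dim, 3, padding=1)
                # D candidate values and their features, one set per spectral channel
                # (channel specificity).
                self.values = nn.Parameter(torch.linspace(0, 1, num_values).repeat(ms_channels, 1))  # (C, D)
                self.value_feat = nn.Parameter(torch.randn(ms_channels, num_values, feat_dim))        # (C, D, F)

            def forward(self, lrms, pan):
                B, C, h, w = lrms.shape
                H, W = h * self.scale, w * self.scale
                # Per-pixel query features at the target resolution: fuse PAN features with
                # (bilinearly resized) LRMS features as a stand-in for global context.
                q = self.pan_feat(pan) + F.interpolate(self.ms_feat(lrms), size=(H, W),
                                                       mode='bilinear', align_corners=False)
                q = q.permute(0, 2, 3, 1).reshape(B, H * W, -1)              # (B, HW, F)
                # Distribution estimation: similarity between pixel queries and value features,
                # softmax-normalized independently for each channel.
                logits = torch.einsum('bpf,cdf->bcpd', q, self.value_feat)   # (B, C, HW, D)
                prob = logits.softmax(dim=-1)
                # Expectation over the candidate values gives the upsampled image.
                out = torch.einsum('bcpd,cd->bcp', prob, self.values)        # (B, C, HW)
                return out.view(B, C, H, W)

        lrms = torch.rand(1, 4, 16, 16)   # low-resolution multispectral input
        pan = torch.rand(1, 1, 64, 64)    # high-resolution panchromatic guide
        print(ExpectationUpsample()(lrms, pan).shape)  # torch.Size([1, 4, 64, 64])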

    New Interpretations of Normalization Methods in Deep Learning

    In recent years, a variety of normalization methods have been proposed to help train neural networks, such as batch normalization (BN), layer normalization (LN), weight normalization (WN), and group normalization (GN). However, mathematical tools for analyzing all of these normalization methods have been lacking. In this paper, we first propose a lemma that defines some necessary tools. We then use these tools to analyze popular normalization methods in depth and reach the following conclusions: 1) most of the normalization methods can be interpreted in a unified framework, namely normalizing pre-activations or weights onto a sphere; 2) since most existing normalization methods are scale-invariant, optimization can be carried out on a sphere with the scaling symmetry removed, which helps stabilize network training; 3) we prove that training with these normalization methods makes the norm of the weights increase, which could cause adversarial vulnerability because it amplifies the attack. Finally, a series of experiments is conducted to verify these claims. Comment: Accepted by AAAI 202
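    The scale invariance behind conclusion 2) can be checked numerically in a few lines; the snippet below is a small illustrative demonstration (not the paper's proof) that batch-normalizing a neuron's pre-activations makes the output depend only on the direction of the weight vector, i.e. on a point on the unit sphere.

        # Numerical check of scale invariance under batch normalization (illustrative only).
        import torch

        torch.manual_seed(0)
        x = torch.randn(256, 10)          # a batch of inputs
        w = torch.randn(10)               # one neuron's weight vector

        def bn_preact(w, x, eps=1e-5):
            z = x @ w                     # pre-activations over the batch
            return (z - z.mean()) / (z.std(unbiased=False) + eps)

        out = bn_preact(w, x)
        # Rescaling w by a positive constant leaves the normalized output (essentially) unchanged,
        # so only the direction w / ||w|| on the unit sphere matters.
        print(torch.allclose(out, bn_preact(3.7 * w, x), atol=1e-4))        # True
        print(torch.allclose(out, bn_preact(w / w.norm(), x), atol=1e-4))   # True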

    Neural Gradient Regularizer

    Owing to their significant success, priors imposed on gradient maps have long been a subject of great interest in image processing. Total variation (TV), one of the most representative regularizers, is known for its ability to capture the sparsity of gradient maps. Nonetheless, TV and its variants often underestimate the gradient maps, weakening edges and details whose gradients should not be zero in the original image. Recently, total deep variation (TDV) has been introduced, which assumes sparsity of feature maps and provides a flexible regularizer learned from large-scale datasets for a specific task. However, TDV requires retraining whenever the image or task changes, limiting its versatility. In this paper, we propose a neural gradient regularizer (NGR) that expresses the gradient map as the output of a neural network. Unlike existing methods, NGR does not rely on the sparsity assumption, thereby avoiding the underestimation of gradient maps. NGR is applicable to various image types and different image processing tasks, functioning in a zero-shot fashion, which makes it a versatile plug-and-play regularizer. Extensive experimental results demonstrate the superior performance of NGR over state-of-the-art counterparts on a range of different tasks, further validating its effectiveness and versatility.
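    As a concrete reference point for the gradient-map priors discussed above, the short sketch below computes finite-difference gradient maps and the anisotropic TV penalty built on them; NGR replaces this explicit sparsity penalty with gradient maps produced by a neural network, so the sketch is illustrative background rather than the authors' code.

        # Finite-difference gradient maps and the anisotropic TV penalty (illustrative background).
        import torch

        def gradient_maps(u):
            """Forward-difference horizontal/vertical gradient maps of an image batch (B, C, H, W)."""
            dx = u[..., :, 1:] - u[..., :, :-1]
            dy = u[..., 1:, :] - u[..., :-1, :]
            return dx, dy

        def tv_penalty(u):
            """Anisotropic total variation: the l1 norm of the gradient maps (sparsity prior)."""
            dx, dy = gradient_maps(u)
            return dx.abs().sum() + dy.abs().sum()

        u = torch.rand(1, 3, 64, 64, requires_grad=True)
        tv_penalty(u).backward()   # differentiable a.e., so usable in gradient-based solvers
        print(u.grad.shape)        # torch.Size([1, 3, 64, 64])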